On the K-Winners-Take-All Network

Neural Information Processing Systems

We present and rigorously analyze a generalization of the Winner-Take-All Network: the K-Winners-Take-All Network. This network identifies the K largest of a set of N real numbers. The network model used is the continuous Hopfield model. The Winner-Take-All Network is a network which identifies the largest of N real numbers. Winner-Take-All Networks have been developed using various neural network models (Grossberg-73, Lippman-87, Feldman-82, Lazzaro-89).
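In functional terms, a KWTA network maps N real inputs to a ±1 vector with +1 at the positions of the K largest inputs. A minimal sketch of that input/output behavior (not the Hopfield dynamics themselves; `kwta` is a hypothetical helper name):

```python
import numpy as np

def kwta(x, k):
    """Idealized K-Winners-Take-All: +1 for the k largest inputs, -1 elsewhere."""
    x = np.asarray(x, dtype=float)
    out = -np.ones_like(x)
    winners = np.argsort(x)[-k:]   # indices of the k largest components
    out[winners] = 1.0
    return out

print(kwta([0.3, -1.2, 2.5, 0.9, 0.1], k=2))  # +1 marks the two largest (2.5 and 0.9)
```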


Analog Neural Networks as Decoders

Neural Information Processing Systems

We have previously demonstrated the use of a continuous Hopfield neural network as a K-Winner-Take-All (KWTA) network [Majani et al., 1989, Erlanson and Abu-Mostafa, 1988]. Given an input of N real numbers, such a network will converge to a vector of K positive one components and (N - K) negative one components, with the positive positions indicating the K largest input components. In addition, we have shown that the (N choose K) such vectors are the only stable states of the system. One application of the KWTA network is the analog decoding of error-correcting codes [Majani et al., 1989, Platt and Hopfield, 1986]. Here, a known set of vectors (the codewords) is transmitted over a noisy channel. At the receiver's end of the channel, the transmitted vector must be reconstructed from the noisy vector.
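The convergence behavior described here can be simulated with a simple Euler integration of continuous-Hopfield-style dynamics under global mutual inhibition, with the inputs entering as the initial state. The specific constants below (weights T_ij = -1 for i != j, bias 2K - N, gain, and step size) are illustrative assumptions consistent with the abstract, not necessarily the paper's exact construction:

```python
import numpy as np

def kwta_hopfield(x, k, beta=5.0, dt=0.05, steps=1000):
    """KWTA via continuous-Hopfield-style dynamics (illustrative parameters):
    global mutual inhibition T_ij = -1 (i != j) plus a bias of 2k - N.
    The N inputs are encoded in the initial state; a stable state with
    exactly k positive components emerges, marking the k largest inputs."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    u = 0.1 * x                          # initial state encodes the inputs
    bias = 2 * k - n                     # chosen so exactly k winners are stable
    for _ in range(steps):
        v = np.tanh(beta * u)            # neuron outputs
        du = -u - (v.sum() - v) + bias   # -u_i + sum_{j!=i} (-1)*v_j + bias
        u += dt * du                     # Euler step
    return np.sign(u)                    # +1 at winners, -1 at losers

out = kwta_hopfield([0.3, -1.2, 2.5, 0.9, 0.1], k=2)
# positive components should mark the two largest inputs (2.5 and 0.9)
```

Because the inter-unit dynamics are symmetric, the ordering of the state components is preserved over time, which is why the winners are the units initialized with the largest inputs.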


Analog Neural Networks as Decoders

Erlanson, Ruth, Abu-Mostafa, Yaser

Neural Information Processing Systems

In turn, KWTA networks can be used as decoders of a class of nonlinear error-correcting codes. By interconnecting such KWTA networks, we can construct decoders capable of decoding more powerful codes. We consider several families of interconnected KWTA networks, analyze their performance in terms of coding theory metrics, and consider the feasibility of embedding such networks in VLSI technologies.
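One concrete family fitting this description is a constant-weight ("K-out-of-N") code, whose codewords are ±1 vectors with exactly K positive components. For such a code, nearest-codeword decoding reduces to setting +1 at the K largest received components, which is exactly the KWTA computation. A sketch under that assumption (the code family and helper name are illustrative, not taken from the abstract):

```python
import numpy as np

def decode_constant_weight(received, k):
    """Nearest-codeword decoding of a k-out-of-n constant-weight code:
    the closest codeword puts +1 at the k largest received components,
    i.e. the K-Winners-Take-All computation."""
    received = np.asarray(received, dtype=float)
    codeword = -np.ones_like(received)
    codeword[np.argsort(received)[-k:]] = 1.0
    return codeword

sent = np.array([1, 1, -1, -1, -1], dtype=float)        # a weight-2 codeword
noisy = sent + np.array([0.2, -0.4, 0.3, -0.1, 0.5])    # additive channel noise
print(decode_constant_weight(noisy, k=2))               # recovers the sent codeword
```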





On the K-Winners-Take-All Network

Majani, E., Erlanson, Ruth, Abu-Mostafa, Yaser S.

Neural Information Processing Systems

We present and rigorously analyze a generalization of the Winner-Take-All Network: the K-Winners-Take-All Network. This network identifies the K largest of a set of N real numbers. The network model used is the continuous Hopfield model.

